
    Truncated covariance matrices and Toeplitz methods in Gaussian processes

    Gaussian processes are a limit extension of neural networks. Standard Gaussian process techniques use a squared exponential covariance function. Here, the use of truncated covariances is proposed. Such covariances have compact support. Their use speeds up matrix inversion and increases precision. Furthermore, they allow the use of fast, memory-efficient Toeplitz inversion for high-dimensional, grid-based Gaussian process predictors. 1 Introduction Gaussian process methods are a natural extension of Bayesian neural network approaches. However, Gaussian processes suffer from the need to invert an n × n matrix, where n is the number of data points. This takes O(n³) floating point operations. For many real-life problems, there is some control over how data is collected, and this data often takes a regular form. For example, data can be collected at regular time intervals or at points on a grid (e.g. video pictures). Often this structure can be used to ensure covariance matrices ..
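A minimal sketch of the core idea, under assumptions not taken from the paper: on a regular 1-D grid a stationary covariance yields a Toeplitz matrix, so K @ alpha = y can be solved by Levinson recursion from the first column alone rather than a dense O(n³) inversion. The Wendland kernel here is just one standard example of a compactly supported covariance, not the paper's exact choice.

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def wendland(r, support=1.0):
    """A compactly supported (Wendland C2) covariance: exactly zero for
    r > support, which also makes K banded on a grid."""
    u = np.clip(r / support, 0.0, 1.0)
    return (1.0 - u) ** 4 * (4.0 * u + 1.0)

n = 50
x = np.linspace(0.0, 10.0, n)            # regularly spaced inputs
first_col = wendland(np.abs(x - x[0]))   # the lags fully describe K
first_col[0] += 1e-6                     # diagonal jitter for stability

y = np.sin(x)                            # toy observations
alpha = solve_toeplitz(first_col, y)     # O(n^2) Levinson solve vs O(n^3) dense

# Cross-check against the dense solve
K = wendland(np.abs(x[:, None] - x[None, :])) + 1e-6 * np.eye(n)
assert np.allclose(alpha, np.linalg.solve(K, y))
```

Because the covariance is exactly zero beyond its support, the matrix is banded as well as Toeplitz, which is what makes the grid-structured solvers both fast and memory efficient.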

    Generalised propagation for fast Fourier transforms with partial or missing data

    Discrete Fourier transforms and other related Fourier methods have become practically implementable due to the fast Fourier transform (FFT). However, there are many situations where performing fast Fourier transforms without complete data would be desirable. In this paper it is recognised that formulating the FFT algorithm as a belief network allows suitable priors to be set for the Fourier coefficients. Furthermore, efficient generalised belief propagation between clusters of four nodes enables the Fourier coefficients to be inferred and the missing data to be estimated in close to O(n log n) time, where n is the total of the given and missing data points. This method is compared with a number of common approaches, such as setting missing data to zero or using interpolation. It is tested on generated data and on a Fourier analysis of a damaged audio signal.
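The two common baselines the abstract mentions (zero-filling and interpolation) can be sketched directly; the paper's belief-propagation method itself is not reproduced here, and the signal and missingness pattern below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 256
t = np.arange(n)
signal = np.sin(2 * np.pi * 5 * t / n)          # a 5-cycle sinusoid

missing = rng.choice(n, size=40, replace=False)  # 40 samples dropped
observed = np.ones(n, dtype=bool)
observed[missing] = False

# Baseline 1: set missing samples to zero before the FFT
zero_filled = np.where(observed, signal, 0.0)

# Baseline 2: linearly interpolate across the gaps first
interpolated = np.interp(t, t[observed], signal[observed])

# Compare each baseline's spectrum with the true spectrum
err_zero = np.abs(np.fft.rfft(zero_filled) - np.fft.rfft(signal)).max()
err_interp = np.abs(np.fft.rfft(interpolated) - np.fft.rfft(signal)).max()
assert err_interp < err_zero   # zero-filling leaks far more energy across bins
```

For a smooth signal with isolated gaps, interpolation distorts the spectrum much less than zero-filling, which is why the paper benchmarks against both before arguing for inference over the Fourier coefficients.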

    Image modeling with position-encoding dynamic trees

    This paper describes the Position-Encoding Dynamic Tree (PEDT). The PEDT is a probabilistic model for images which improves on the Dynamic Tree by allowing the positions of objects to play a part in the model. This increases the flexibility of the model over the Dynamic Tree and allows the positions of objects to be located and manipulated. The paper motivates and defines this form of probabilistic model using the belief network formalism. A structured variational approach for inference and learning in the PEDT is developed, and the resulting variational updates are obtained, along with additional implementation considerations which ensure the computational cost scales linearly in the number of nodes of the belief network. The PEDT model is demonstrated and compared with the dynamic tree and fixed tree. The structured variational learning method is compared with mean field approaches.

    An analysis of the local optima storage capacity of Hopfield network based fitness function models

    A Hopfield Neural Network (HNN) with a new weight update rule can be treated as a second-order Estimation of Distribution Algorithm (EDA) or Fitness Function Model (FFM) for solving optimisation problems. The HNN models promising solutions and has a capacity for storing a certain number of local optima as low-energy attractors. Solutions are generated by sampling the patterns stored in the attractors. The number of attractors a network can store (its capacity) has an impact on solution diversity and, consequently, solution quality. This paper introduces two new HNN learning rules and presents the Hopfield EDA (HEDA), which learns weight values from samples of the fitness function. It investigates the attractor storage capacity of the HEDA and shows it to be equal to that known in the literature for a standard HNN. The relationship between HEDA capacity and linkage order is also investigated.
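A minimal sketch of the standard HNN the capacity result refers to, assuming classic Hebbian learning (the paper's two HEDA learning rules are not reproduced): patterns are stored as low-energy attractors and recovered by iterated updates from a corrupted probe.

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian weights W = (1/n) * sum_p x_p x_p^T, zero diagonal."""
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0.0)
    return W

def recall(W, state, steps=20):
    """Synchronous sign updates until a fixed point (or step limit)."""
    for _ in range(steps):
        new = np.sign(W @ state)
        new[new == 0] = 1.0          # break ties toward +1
        if np.array_equal(new, state):
            break
        state = new
    return state

rng = np.random.default_rng(1)
n, p = 100, 3                        # p well below the ~0.138 n capacity bound
patterns = rng.choice([-1.0, 1.0], size=(p, n))
W = train_hopfield(patterns)

# Corrupt a stored pattern in 5 positions, then recover it from the attractor
probe = patterns[0].copy()
probe[rng.choice(n, size=5, replace=False)] *= -1
assert np.array_equal(recall(W, probe), patterns[0])
```

Loading the network with more patterns than its capacity allows would make the attractors interfere, which is exactly why capacity bears on the diversity and quality of sampled solutions.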

    Food for pollinators: quantifying the nectar and pollen resources of urban flower meadows

    Planted meadows are increasingly used to improve the biodiversity and aesthetic amenity value of urban areas. Although many ‘pollinator-friendly’ seed mixes are available, the floral resources these provide to flower-visiting insects, and how these change through time, are largely unknown. Such data are necessary to compare the resources provided by alternative meadow seed mixes to each other and to other flowering habitats. We used quantitative surveys of over 2 million flowers to estimate the nectar and pollen resources offered by two exemplar commercial seed mixes (one annual, one perennial) and associated weeds grown as 300 m² meadows across four UK cities, sampled at six time points between May and September 2013. Nectar sugar and pollen rewards per flower varied widely across the 65 species surveyed, with native British weed species (including dandelion, Taraxacum agg.) contributing the top five nectar producers and two of the top ten pollen producers. Seed mix species yielding the highest rewards per flower included Leontodon hispidus, Centaurea cyanus and C. nigra for nectar, and Papaver rhoeas, Eschscholzia californica and Malva moschata for pollen. Perennial meadows produced up to 20× more nectar and up to 6× more pollen than annual meadows, which in turn produced far more than amenity grassland controls. Perennial meadows produced resources earlier in the year than annual meadows, but both seed mixes delivered very low resource levels early in the year, and these were provided almost entirely by native weeds. Pollen volume per flower is well predicted statistically by floral morphology, and nectar sugar mass and pollen volume per unit area are correlated with flower counts, raising the possibility that resource levels can be estimated for species or habitats where they cannot be measured directly. Our approach does not incorporate resource quality information (for example, pollen protein or essential amino acid content), but can easily do so when suitable data exist. Our approach should inform the design of new seed mixes to ensure continuity in floral resource availability throughout the year, and to identify suitable species to fill resource gaps in established mixes.

    Super-resolution: A comprehensive survey


    Charles Bonnet Syndrome: Evidence for a Generative Model in the Cortex?

    Several theories propose that the cortex implements an internal model to explain, predict, and learn about sensory data, but the nature of this model is unclear. One condition that could be highly informative here is Charles Bonnet syndrome (CBS), where loss of vision leads to complex, vivid visual hallucinations of objects, people, and whole scenes. CBS could be taken as an indication that there is a generative model in the brain, specifically one that can synthesise rich, consistent visual representations even in the absence of actual visual input. The processes that lead to CBS are poorly understood. Here, we argue that a model recently introduced in machine learning, the deep Boltzmann machine (DBM), could capture the relevant aspects of (hypothetical) generative processing in the cortex. The DBM carries both the semantics of a probabilistic generative model and of a neural network. The latter allows us to model a concrete neural mechanism that could underlie CBS, namely, homeostatic regulation of neuronal activity. We show that homeostatic plasticity could serve to make the learnt internal model robust against, e.g., degradation of sensory input, but overcompensate in the case of CBS, leading to hallucinations. We demonstrate how a wide range of features of CBS can be explained in the model and suggest a potential role for the neuromodulator acetylcholine. This work constitutes the first concrete computational model of CBS and the first application of the DBM as a model in computational neuroscience. Our results lend further credence to the hypothesis of a generative model in the brain.
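The generative idea can be sketched with a single restricted Boltzmann machine layer (one building block of the DBM; the paper's full DBM and its homeostatic mechanism are not reproduced, and the hand-set weights below are purely illustrative): clamping a hidden "cause" and sampling top-down produces a structured visible pattern with no sensory input at all, which is the sense in which such a network can hallucinate.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sample(p):
    """Draw binary units from their Bernoulli activation probabilities."""
    return (rng.random(p.shape) < p).astype(float)

# Hand-crafted weights: each hidden unit strongly drives one half of the
# visible layer, so top-down samples are structured rather than random noise.
n_vis, n_hid = 8, 2
W = np.zeros((n_vis, n_hid))
W[:4, 0] = 4.0          # hidden cause 0 -> first four visible units
W[4:, 1] = 4.0          # hidden cause 1 -> last four visible units
b_vis = -2.0 * np.ones(n_vis)

# Top-down generation with sensory input absent: clamp one cause on
h = np.array([1.0, 0.0])
p_vis = sigmoid(W @ h + b_vis)   # first half active (~0.88), second half off (~0.12)
v = sample(p_vis)

# One upward pass infers the hidden cause back from the generated pattern
p_hid = sigmoid(W.T @ v)
```

Alternating such downward and upward passes is Gibbs sampling, the standard way a Boltzmann machine both generates data and infers its causes.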

    MFDTs: mean field dynamic trees
